2025-11-19 11:00:00 | America/New_York

Yingsi Qin, Carnegie Mellon University

Spatially Programmable Lensing

At the heart of most optical systems is a lens that produces a single, fixed focal plane. This constraint limits both imaging and displays: cameras must trade off aperture, depth of field, and diffraction, while VR displays project all pixels to a single accommodation distance, preventing natural focus cues and causing visual discomfort. These challenges arise because a lens creates one global focal plane for all pixels: a plane that can move but cannot change shape. This talk explores a new class of optical systems that challenges this convention by allowing spatially varying focus across the sensor or display. The first part introduces the Split-Lohmann Multifocal Display, a near-eye display that can simultaneously place individual pixels at different depths, fully supporting the natural focusing of the eye. This technique enables real-time streaming of 3D content over a large depth range at high spatial resolution, offering an exciting step toward a more immersive and interactive 3D viewing experience. The second part presents Spatially-Varying Autofocus, a technique that focuses different parts of the sensor onto different depths in the scene, enabling an arbitrary focal surface while maintaining a large aperture and high spatial resolution. Our prototype demonstrates real-time, optical all-in-focus imaging. Together, these optical systems advance the idea of depth as a spatially programmable dimension, opening new possibilities for imaging and display applications, including 3D sensing, autonomous driving, microscopy, and immersive displays.

Speaker's Bio

Yingsi is a PhD candidate in Electrical and Computer Engineering at Carnegie Mellon University, advised by Prof. Aswin Sankaranarayanan and Prof. Matthew O’Toole. Her research focuses on designing and building next-generation computational imaging and 3D display systems. By integrating computer vision, optics, signal processing, and machine learning, she creates new approaches to capture, process, and visualize three-dimensional information for mixed reality and machine vision applications. Yingsi's work has been recognized with the Best Paper Award at SIGGRAPH 2023, the Best Demo Award at ICCP 2023, and a Best Paper Honorable Mention Award at ICCV 2025. Yingsi is also a recipient of the Tan Endowed Graduate Fellowship and the James Sprague Presidential Fellowship at Carnegie Mellon University. Prior to CMU, Yingsi obtained her Bachelor of Science in Computer Science from Columbia University and her Bachelor of Arts in Physics from Colgate University. She was a research intern at Meta Reality Labs on the Display Systems Research team (2024, 2025) and at Snap Research on the Computational Imaging team (2020). She was also a software engineering intern at Google Search (2019).